STARS - 2015


Section: New Results

Introduction

This year, Stars proposed new results related to its three main research axes: perception for activity recognition, semantic activity recognition, and software engineering for activity recognition.

Perception for Activity Recognition

Participants: Julien Badie, Slawomir Bak, Piotr Bilinski, François Brémond, Duc Phu Chau, Etienne Corvée, Antitza Dantcheva, Kanishka Nithin Dhandapani, Carolina Garate, Furqan Muhammad Khan, Michal Koperski, Thi Lan Anh Nguyen, Javier Ortiz, Ujjwal Ujjval.

The new results for perception for activity recognition are:

  • Pedestrian Detection using Convolutional Neural Networks (see 7.2)

  • Head detection for eye tracking application (see 7.3)

  • Minimizing hallucination in Histogram of Oriented Gradients (see 7.4)

  • Hybrid approaches for Gender estimation (see 7.5)

  • Automated Healthcare: Facial-expression-analysis for Alzheimer's patients in musical mnemotherapy (see 7.6)

  • Robust Global Tracker based on an Online Estimation of Tracklet Descriptor Reliability (see 7.7)

  • Optimizing people tracking for a video-camera network (see 7.8)

  • Multi-camera Multi-object Tracking and Trajectory Fusion (see 7.9)

  • Person Re-Identification in Real-World Surveillance Systems (see 7.10)

  • Human Action Recognition in Videos (see 7.11)

Semantic Activity Recognition

Participants: Vasanth Bathrinarayanan, François Brémond, Duc Phu Chau, Serhan Cosar, Alvaro Gomez Uria Covella, Carlos Fernando Crispim Junior, Ramiro Leandro Diaz, Giuseppe Donatielo, Baptiste Fosty, Carolina Garate, Alexandra Koenig, Michal Koperski, Farhood Negin, Thanh Hung Nguyen, Min Kue Phan Tran, Philippe Robert.

For this research axis, the contributions are:

  • Evaluation of Event Recognition without using Ground Truth (see 7.12)

  • Semantic Event Fusion of Different Visual Modality Concepts for Activity Recognition (see 7.13)

  • Semi-supervised activity recognition using high-order temporal-composite patterns of visual concepts (see 7.14)

  • From activity recognition to the assessment of seniors' autonomy (see 7.15)

  • Serious Games Interfaces using an RGB-D camera (see 7.16)

  • Assistance for Older Adults in Serious Game using an Interactive System (see 7.17)

  • Generating Unsupervised Models for Online Long-Term Daily Living Activity Recognition (see 7.18)

Software Engineering for Activity Recognition

Participants: Sabine Moisan, Annie Ressouche, Jean-Paul Rigault, Ines Sarray, Imane Khalis, Nazli Temur, Daniel Gaffé, Rachid Guerchouche, Matias Marin, Etienne Corvée, Carolina Da Silva Gomes Crispim, Anais Ducoffe, Jean Yves Tigli, François Brémond.

The contributions for this research axis are:

  • Run-time Adaptation of Video Systems (see 7.19)

  • Scenario Description Language (see 7.20)

  • Scenario Recognition (see 7.21)

  • The Clem Workflow (see 7.22)

  • Safe Composition in WComp Middleware for Internet of Things (see 7.23)

  • Design of UHD panoramic video camera (see 7.24)

  • Brick & Mortar Cookies (see 7.25)

  • Monitoring Older People Experiments (see 7.26)